PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Unlike existing explainers for GNNs, whose explanations are drawn from a set of linear functions of the explained features, PGM-Explainer can demonstrate the dependencies among explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov blanket of the target prediction, i.e., it contains all of its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements as the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers on many benchmark tasks.
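The core idea of a perturbation-based, model-agnostic explainer like the one described above can be sketched as follows: randomly perturb graph components, record how the target prediction responds, and keep the components whose perturbation is statistically dependent on the prediction. This is a minimal conceptual sketch, not the authors' algorithm; `gnn_predict` is a hypothetical stand-in for a trained GNN, and the dependence test here is a simple difference of conditional probabilities rather than a full PGM construction.

```python
import random

random.seed(0)

# Hypothetical stand-in for a trained GNN's prediction on a target node:
# in this toy model, the prediction flips when node 0 or node 2 is perturbed.
def gnn_predict(perturbed):
    return int(perturbed[0] or perturbed[2])

NUM_NODES = 5
NUM_SAMPLES = 2000

# Step 1: randomly perturb each node's features and record the prediction.
samples = []
for _ in range(NUM_SAMPLES):
    perturbed = [random.random() < 0.5 for _ in range(NUM_NODES)]
    samples.append((tuple(perturbed), gnn_predict(perturbed)))

# Step 2: for each node, estimate the conditional probability of the
# prediction given whether that node was perturbed. Nodes whose perturbation
# shifts this conditional probability are candidate variables for the PGM.
influence = {}
for v in range(NUM_NODES):
    on = [y for x, y in samples if x[v]]
    off = [y for x, y in samples if not x[v]]
    p_on = sum(on) / len(on)     # P(prediction = 1 | node v perturbed)
    p_off = sum(off) / len(off)  # P(prediction = 1 | node v untouched)
    influence[v] = abs(p_on - p_off)

# Step 3: keep nodes with a non-negligible dependence on the prediction.
important = sorted(v for v, s in influence.items() if s > 0.1)
print("influential nodes:", important)
```

In the full method described in the paper, this filtering step would be followed by structure learning over the retained variables to produce a Bayesian network; the sketch above only illustrates the sampling and dependence-screening idea.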
Review for NeurIPS paper: PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
In this paper, the authors propose a new method to explain GNNs. The proposed algorithm, PGM-Explainer, is technically sound and novel. Through experiments, the authors demonstrate that the proposed method outperforms GNNExplainer, a state-of-the-art GNN explainer. The explanation of GNNs is an important research topic with only a few existing methods. Moreover, all reviewers are positive about the paper, and thus it is suitable for presentation at NeurIPS.